
    Genetic Transfer or Population Diversification? Deciphering the Secret Ingredients of Evolutionary Multitask Optimization

    Evolutionary multitasking has recently emerged as a novel paradigm that enables the similarities and/or latent complementarities (if present) between distinct optimization tasks to be exploited autonomously, simply by solving them together under a unified solution representation scheme. An important matter underpinning future algorithmic advancements is to develop a better understanding of the driving force behind successful multitask problem-solving. In this regard, two (seemingly disparate) ideas have been put forward, namely, (a) implicit genetic transfer as the key ingredient facilitating the exchange of high-quality genetic material across tasks, and (b) population diversification resulting in effective global search of the unified search space encompassing all tasks. In this paper, we present empirical results that provide a clearer picture of the relationship between the two aforementioned propositions. For the numerical experiments we use Sudoku puzzles as case studies, mainly because outwardly dissimilar puzzle statements can often have nearly identical final solutions. The experiments reveal that while on many occasions genetic transfer and population diversity may be viewed as two sides of the same coin, the wider implication of genetic transfer, as shall be shown herein, captures the true essence of evolutionary multitasking to the fullest.
    Comment: 7 pages, 6 figures
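    To make the unified-representation idea above concrete, here is a minimal Python sketch of multitask evolution, not the paper's algorithm: two synthetic continuous tasks share one genotype, each individual carries a "skill factor" (the task it is evaluated on), and cross-task mating with probability RMP is what enables implicit genetic transfer. The objectives f1/f2 and all parameter values are illustrative placeholders.

```python
import random

DIM, POP, RMP, GENS = 20, 40, 0.3, 100  # illustrative settings

def f1(x):  # placeholder objective for task 1
    return sum((xi - 0.25) ** 2 for xi in x)

def f2(x):  # placeholder objective for task 2
    return sum((xi - 0.75) ** 2 for xi in x)

tasks = [f1, f2]

def crossover(a, b):
    cut = random.randrange(1, DIM)
    return a[:cut] + b[cut:]

def mutate(x):
    x = list(x)
    x[random.randrange(DIM)] = random.random()
    return x

# Unified population: (genotype, skill factor) pairs, half per task.
pop = [([random.random() for _ in range(DIM)], t)
       for t in range(2) for _ in range(POP // 2)]

for _ in range(GENS):
    offspring = []
    for _ in range(POP):
        (xa, ta), (xb, tb) = random.sample(pop, 2)
        if ta == tb or random.random() < RMP:
            # Cross-task mating: genetic material moves between tasks
            # through the shared genotype (implicit genetic transfer).
            offspring.append((mutate(crossover(xa, xb)), random.choice([ta, tb])))
        else:
            offspring.append((mutate(xa), ta))
    # Elitist selection within each task keeps both tasks represented.
    pool = pop + offspring
    pop = []
    for t in range(2):
        ranked = sorted((ind for ind in pool if ind[1] == t),
                        key=lambda ind: tasks[t](ind[0]))
        pop.extend(ranked[:POP // 2])

print({t: min(tasks[t](x) for x, s in pop if s == t) for t in range(2)})
```

    Because the two optima here sit at different points of the same unified space, raising RMP tends both to diversify each task's subpopulation and to import building blocks from the other task, which is exactly the entanglement between the two propositions that the paper probes.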

    Large-scale Heteroscedastic Regression via Gaussian Process

    Heteroscedastic regression, which accounts for varying noise across observations, has many applications in fields such as machine learning and statistics. Here we focus on heteroscedastic Gaussian process (HGP) regression, which integrates the latent function and the noise function in a unified non-parametric Bayesian framework. Though it shows remarkable performance, HGP suffers from cubic time complexity, which severely limits its application to big data. To improve the scalability, we first develop a variational sparse inference algorithm, named VSHGP, to handle large-scale datasets. Furthermore, two variants are developed to improve the scalability and capability of VSHGP. The first is stochastic VSHGP (SVSHGP), which derives a factorized evidence lower bound, thus enabling efficient stochastic variational inference. The second is distributed VSHGP (DVSHGP), which (i) follows the Bayesian committee machine formalism to distribute computations over multiple local VSHGP experts with many inducing points; and (ii) adopts hybrid parameters for the experts to guard against over-fitting and capture local variation. The superiority of DVSHGP and SVSHGP over existing scalable heteroscedastic/homoscedastic GPs is then extensively verified on various datasets.
    Comment: 14 pages, 15 figures
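    To make the setup concrete, below is a minimal sketch of heteroscedastic GP algebra under assumptions not taken from the paper: an RBF kernel and a fixed, hand-chosen log-noise function g. In HGP/VSHGP, g is itself a latent function inferred variationally, which this sketch deliberately omits; it only shows how input-dependent noise enters the model and where the cubic cost arises.

```python
import numpy as np

def rbf(X1, X2, ls=1.0, var=1.0):
    d = (X1[:, None, :] - X2[None, :, :]) ** 2
    return var * np.exp(-0.5 * d.sum(-1) / ls ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (50, 1))
g = lambda x: -2.0 + 0.5 * np.abs(x)      # hand-chosen log-noise (assumption)
y = np.sin(X[:, 0]) + rng.normal(0, np.exp(g(X[:, 0]) / 2))

# Heteroscedastic noise: a diagonal of exp(g(x_i)) replaces sigma^2 * I;
# the rest is standard GP algebra.
K = rbf(X, X) + np.diag(np.exp(g(X[:, 0])))
L = np.linalg.cholesky(K)                 # O(n^3): the bottleneck VSHGP attacks
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

Xs = np.linspace(-3, 3, 100)[:, None]
Ks = rbf(Xs, X)
mean = Ks @ alpha                         # predictive mean
v = np.linalg.solve(L, Ks.T)
var = rbf(Xs, Xs).diagonal() - (v ** 2).sum(0) + np.exp(g(Xs[:, 0]))
print(mean[:3], var[:3])                  # noise-aware predictive variance
```

    Sparse variational inference, the route VSHGP takes, replaces the n-by-n Cholesky factorization with computations over m inducing points with m much smaller than n, which is what brings the cost down from cubic in the training size.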

    Understanding and Comparing Scalable Gaussian Process Regression for Big Data

    As a non-parametric Bayesian model that produces informative predictive distributions, the Gaussian process (GP) has been widely used in various fields such as regression, classification and optimization. The cubic complexity of the standard GP, however, leads to poor scalability, which poses challenges in the era of big data. Hence, various scalable GPs have been developed in the literature to improve scalability while retaining desirable prediction accuracy. This paper investigates the methodological characteristics and performance of representative global and local scalable GPs, including sparse approximations and local aggregations, from four main perspectives: scalability, capability, controllability and robustness. Numerical experiments on two toy examples and five real-world datasets with up to 250K points offer the following findings. In terms of scalability, most scalable GPs have a time complexity that is linear in the training size. In terms of capability, sparse approximations capture long-term spatial correlations, while local aggregations capture local patterns but suffer from over-fitting in some scenarios. In terms of controllability, the performance of sparse approximations can be improved simply by increasing the inducing size, but this is not the case for local aggregations. In terms of robustness, local aggregations are robust to various initializations of hyperparameters due to the local attention mechanism. Finally, we highlight that a proper hybrid of global and local scalable GPs may be a promising way to improve both model capability and scalability for big data.
    Comment: 25 pages, 15 figures, preprint submitted to KB
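    As a concrete illustration of the local-aggregation family compared above, here is a minimal sketch of a product-of-experts-style fusion of independent local GP experts. The random partitioning, kernel, and precision-weighted combination are illustrative assumptions, not the specific aggregation algorithms benchmarked in the paper.

```python
import numpy as np

def rbf(X1, X2, ls=1.0, var=1.0):
    d = (X1[:, None, :] - X2[None, :, :]) ** 2
    return var * np.exp(-0.5 * d.sum(-1) / ls ** 2)

def gp_predict(Xtr, ytr, Xs, noise=0.1):
    """Exact GP prediction for one local expert (cubic only in its subset size)."""
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, ytr))
    Ks = rbf(Xs, Xtr)
    v = np.linalg.solve(L, Ks.T)
    return Ks @ alpha, rbf(Xs, Xs).diagonal() - (v ** 2).sum(0) + noise

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, (600, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(600)
Xs = np.linspace(-3, 3, 50)[:, None]

# Disjoint random partition into 6 experts: each solves a small problem,
# so the total cost is roughly M * (n/M)^3 instead of n^3.
idx = rng.permutation(600).reshape(6, 100)
mus, vs = zip(*(gp_predict(X[i], y[i], Xs) for i in idx))

# Product-of-experts fusion: precisions add; means are precision-weighted.
prec = sum(1.0 / v for v in vs)
mean = sum(m / v for m, v in zip(mus, vs)) / prec
print(mean[:3], (1.0 / prec)[:3])
```

    This kind of local attention is consistent with the findings quoted above: each expert fits its own region well (local patterns, robustness to hyperparameter initialization) but sees no correlations beyond its partition, and naive precision-weighted fusion can become over-confident, i.e. over-fit, away from the data.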